super-intelligent machine
Ministers not doing enough to control AI, says UK professor
One of the professors at the forefront of artificial intelligence has said ministers are not doing enough to protect against the dangers of super-intelligent machines in the future. In the latest contribution to the debate about the safety of the ever-quickening development of AI, Prof Stuart Russell told the Times that the government was reluctant to regulate the industry despite concerns that the technology could get out of control and threaten the future of humanity.

Russell, a professor at the University of California, Berkeley, and a former adviser to the US and UK governments, told the Times he was concerned that ChatGPT, which was released in November, could become part of a super-intelligent machine that could not be constrained. "How do you maintain power over entities more powerful than you – for ever?" he asked. "If you don't have an answer, then stop doing the research. The stakes couldn't be higher: if we don't control our own civilisation, we have no say in whether we continue to exist."

Since ChatGPT's public release last year, which has been used to write prose and has already worried lecturers and teachers over its use in universities and schools, the debate over its long-term safety has intensified. Elon Musk, the Tesla founder and Twitter owner, and the Apple co-founder Steve Wozniak, along with 1,000 AI experts, wrote a letter warning that an "out-of-control race" was going on at AI labs and calling for a pause on the creation of giant-scale AI. The letter warned that the labs were developing "ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control".

There is also concern about AI's wider application. A House of Lords committee this week heard evidence from Sir Lawrence Freedman, a war studies professor, who spoke about concerns over how AI might be used in future wars. Google's rival chatbot, Bard, is due to be released in the EU later this year.
Russell himself previously worked for the UN on how to monitor the nuclear test-ban treaty, and was asked to work with Whitehall earlier this year. He said: "The Foreign Office … talked to a lot of people and they concluded that loss of control was a plausible and extremely high-significance outcome. And then the government came out with a regulatory approach that says: 'Nothing to see here … we'll welcome the AI industry as if we were talking about making cars or something like that'."
- Europe > United Kingdom (0.57)
- North America > United States > California (0.26)
- Government > Military (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.57)
Worried about super-intelligent machines? They are already here
In the first of his four (stunning) Reith lectures on living with artificial intelligence, Prof Stuart Russell, of the University of California at Berkeley, began with an excerpt from a paper written by Alan Turing in 1950. Its title was Computing Machinery and Intelligence and in it Turing introduced many of the core ideas of what became the academic discipline of artificial intelligence (AI), including the sensation du jour of our own time, so-called machine learning. From this amazing text, Russell pulled one dramatic quote: "Once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage therefore we should have to expect the machines to take control." This thought was more forcefully articulated by IJ Good, one of Turing's colleagues at Bletchley Park: "The first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."
- North America > United States > California (0.25)
- Europe > United Kingdom > England > Buckinghamshire > Milton Keynes (0.25)
- Asia > China (0.05)
- Information Technology > Artificial Intelligence > History (1.00)
- Information Technology > Artificial Intelligence > Issues > Turing's Test (0.56)
Inside Elon Musk's Battle to Save Humanity From the A.I. Apocalypse
In these movies, intelligent machines eventually surpass their human creators, directly threatening the very existence of humanity. The dangers of A.I. have been a long-running trope in pop culture. What was once considered a fascinating, looming threat has evolved into an overplayed, cheesy artifact: set against more pressing, immediate threats, super-intelligent machines are just not that scary anymore. On top of this, we are still a ways away from the next levels of A.I., with some prominent figures in research saying humans may never even get there. However, the question of whether humans will birth superintelligence is not so cut and dried.
12 must-watch TED Talks on artificial intelligence - QAT Global
For all of you who are technology lovers, AI enthusiasts, and casual consumers with piqued interest: don't miss your chance to learn about the newest advancements in artificial intelligence and the opportunity to join the discussion on the ethics, logistics, and reality of super-intelligent machines. Explore the possibilities of super-intelligence improving our world and our everyday lives as you dive into this list of TED Talks on artificial intelligence. We have compiled the best TED Talks on AI, providing you with the information you seek on technological developments, innovation, and the future of the field. Here are the best TED Talks for anyone interested in AI. We hope you enjoy our list!
- Education > Educational Setting > Online (1.00)
- Education > Educational Setting > Continuing Education (1.00)
To Create A Super-Intelligent Machine, Start With An Equation - Liwaiwai
Intelligence is a very difficult concept and, until recently, no one has succeeded in giving it a satisfactory formal definition. Most researchers have given up grappling with the notion of intelligence in full generality, and instead focus on related but more limited concepts – but I argue that mathematically defining intelligence is not only possible, but crucial to understanding and developing super-intelligent machines. From this, my research group has even successfully developed software that can learn to play Pac-Man from scratch. Let me explain – but first, we need to define "intelligence". I have worked on the question of general rational intelligence for many years.
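The formal definition alluded to here is plausibly the universal intelligence measure of Legg and Hutter, which scores an agent by its expected performance across all computable environments, weighting simpler environments more heavily. Whether this is the exact equation the author has in mind is an assumption on our part, but it is the standard formalisation in this line of research:

```latex
% Universal intelligence of an agent \pi (Legg--Hutter measure):
% a weighted sum of the agent's expected total reward V^{\pi}_{\mu}
% over every computable environment \mu in the class E, where
% K(\mu) is the Kolmogorov complexity of \mu, so simpler
% environments receive exponentially more weight.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

An agent that scores highly under this measure must perform well across many environments rather than in one hand-picked task, which is why the same framework can cover something like learning Pac-Man from scratch.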
Why Elon Musk is Wrong about AI – Hacker Noon
AI will rise up and kill us all. Didn't Facebook have to shut down its latest monstrous experiment because it went rogue and developed its own secret language? For all we know, Skynet's factories are cranking out an army of Terminators already! The only problem is, it's all nonsense. Musk warns that AI is an "existential threat worse than North Korea." Last I checked, North Korea has nukes and a little madman in power, while super-AI is still confined to the pages of cyberpunk novels, so I'm not buying it. Look, the guy is a lot smarter than me, and I think his batteries, cars and solar roof tiles will change the world, but he's spent a little too much time watching 2001: A Space Odyssey.
- North America > United States (0.47)
- Asia > North Korea (0.24)
- Asia > China (0.05)
- Europe > Germany (0.04)
- Government (1.00)
- Health & Medicine (0.70)
- Banking & Finance > Economy (0.47)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.47)
What to do about robots and artificial intelligence
Two weeks ago, I wrote in a column headlined "Please stop saying I'll be replaced by a robot" that fears over machines making humans redundant were often overblown and might well be self-defeating, and that it was better to focus instead on what robots could help people do better. Some readers welcomed what I had to say; others disagreed.

One thoughtful rejoinder, by Ms Ethel Tan Hui Yan, was published in The Straits Times Forum under the headline "Let's not take AI lightly". Ms Tan wrote: "At present, robots have attained the level of intelligence where they help more than harm... But when intuitive AI (artificial intelligence) robots with a learning capacity that far exceeds that of man's are developed, they could very well make humans redundant in many jobs. It is essential to talk about the augmented economy and the necessary industrial restructuring and educational reforms that need to be made, which will equip our workforce and prepare our younger generation for the challenge ahead."

Mr Frederick Wong said in an e-mail that "AI will impact us deeply in the coming decade and not the next century as some might think".
- Asia > South Korea (0.05)
- Asia > China > Henan Province > Zhengzhou (0.05)
The two most important technologies of our time are also the most dangerous
The entirety of human knowledge has been leading to this point. Information technologies and life sciences are at an inflection point. Two technologies that are the pinnacle of achievement in their domains are going mainstream. In the IT world, it's Artificial Intelligence (AI): super-powerful computers that can program themselves and learn without the assistance of humans. In Life Sciences, it's Gene Editing (CRISPR/Cas9): the ability to reprogram genomes and change the course of evolution.
- North America > United States > California (0.14)
- Asia > China (0.05)
- North America > United States > District of Columbia > Washington (0.04)
- Information Technology > Security & Privacy (0.95)
- Information Technology > Artificial Intelligence > Robots (0.95)
Humans still matter when it comes to artificial intelligence
From Google's self-driving cars to Amazon's purchase predictions, artificial intelligence (AI) is any program that does something we would normally consider an intelligent human act. But as AI technology continues to develop rapidly, prominent personalities including Stephen Hawking and Bill Gates have voiced their concern about the rise of super-intelligent machines. The question they ask is: "How dangerous could AI become?" We've all watched at least one science fiction movie where an intelligent robot goes rogue and tries to destroy all humanity. And while we certainly aren't ignoring the valid concerns raised about super-intelligent machines, it is possible that AI and humans can be complementary.
- Europe > United Kingdom (0.17)
- Africa (0.06)
Smart Machines Are Not a Threat to Humanity
Concerns have recently been widely expressed that artificial intelligence presents a threat to humanity. For instance, Stephen Hawking is quoted in Cellan-Jones [1] as saying: "The development of full artificial intelligence could spell the end of the human race." Similar concerns have also been expressed by Elon Musk, Steve Wozniak, and others. Such concerns have a long history. John von Neumann is quoted by Stanislaw Ulam [8] as the first to use the term "the singularity": the point at which artificial intelligence exceeds human intelligence.
- North America > United States > Illinois (0.05)
- Europe > United Kingdom > Scotland > City of Edinburgh > Edinburgh (0.05)
- Europe > Netherlands > North Holland > Amsterdam (0.05)